    D2-Net: A Trainable CNN for Joint Detection and Description of Local Features

    Full text link
    In this work we address the problem of finding reliable pixel-level correspondences under difficult imaging conditions. We propose an approach where a single convolutional neural network plays a dual role: it is simultaneously a dense feature descriptor and a feature detector. By postponing the detection to a later stage, the obtained keypoints are more stable than their traditional counterparts based on early detection of low-level structures. We show that this model can be trained using pixel correspondences extracted from readily available large-scale SfM reconstructions, without any further annotations. The proposed method obtains state-of-the-art performance on both the difficult Aachen Day-Night localization dataset and the InLoc indoor localization benchmark, as well as competitive performance on other benchmarks for image matching and 3D reconstruction. Accepted at CVPR 2019.
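
    The abstract describes the describe-and-detect idea only at a high level. Below is a minimal NumPy sketch of how a single dense feature map can play both roles: the per-pixel feature vectors serve as descriptors, and keypoints are taken where a response is maximal both across channels and within a small spatial neighbourhood. The detection rule and the random feature map are illustrative simplifications, not the exact formulation or output of the paper's network.

        import numpy as np

        def describe_and_detect(feature_map, border=2):
            """Use one dense CNN feature map of shape (C, H, W) both as per-pixel
            descriptors and for keypoint detection (simplified illustration)."""
            C, H, W = feature_map.shape

            # Descriptors: the C-dimensional vector at each pixel, L2-normalised.
            descriptors = feature_map.reshape(C, -1).T
            descriptors /= np.linalg.norm(descriptors, axis=1, keepdims=True) + 1e-8

            # Detection: pick the strongest channel at each pixel...
            best_channel = feature_map.argmax(axis=0)
            best_response = feature_map.max(axis=0)

            keypoints = []
            for i in range(border, H - border):
                for j in range(border, W - border):
                    k = best_channel[i, j]
                    patch = feature_map[k, i - 1:i + 2, j - 1:j + 2]
                    # ...and keep the pixel only if it is also a spatial local
                    # maximum of that channel's response map.
                    if best_response[i, j] >= patch.max():
                        keypoints.append((i, j))
            return np.array(keypoints), descriptors

        # Random placeholder standing in for a real CNN feature map.
        fmap = np.random.rand(64, 32, 32).astype(np.float32)
        kpts, descs = describe_and_detect(fmap)
        print(f"{len(kpts)} keypoints, {descs.shape[1]}-dim descriptors")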

    On Challenges of Image Features for Cloud-Based Localization and Mapping

    No full text
    Visual Localization and Mapping is a long-standing problem in Computer Vision with high relevance in Robotics and Augmented Reality (AR) applications. Most state-of-the-art approaches build upon image features due to their efficiency, versatility, and scalability. Despite tremendous progress by the community over the last decades, recent benchmarks show that current methods still struggle with robustness and accuracy on real-world data. This has a negative impact on downstream applications in terms of AR user experience or performance of autonomous robots. To overcome these on-device limitations, and with the advent of internet-connected mobile devices, visual localization and mapping capabilities have increasingly been offloaded to the cloud. In addition, this provides new opportunities, such as data crowd-sourcing, and enables new application scenarios, like collaborative experiences. However, it also raises new challenges, notably guaranteeing the protection of users' privacy and maintaining compatibility of previously built maps in the face of hardware or software updates. These challenges severely hinder the ease of development of the technology and its widespread adoption. In this thesis, we focus on image features and provide solutions for some of their limitations, both for the on-device and cloud scenarios. First, we propose a method for refining the 2D positions of local features, without knowledge of scene or camera geometry, which improves the quality of maps and the localization accuracy. Second, we propose a privacy-preserving feature representation allowing matching for the end-goal task while concealing the contents of the original image. Third, we propose an approach for cross-descriptor matching enabling forward and backward compatibility: migration of old maps to recent features and localization with old features inside recent maps. This thesis raises awareness of some practical challenges of image features and takes a step towards solving the underlying issues.
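
    All three contributions operate on matched local features; for context, the sketch below shows the plain mutual nearest-neighbour descriptor matching that such pipelines typically build on. It is a generic baseline in NumPy with random placeholder descriptors, not any of the methods proposed in the thesis.

        import numpy as np

        def mutual_nn_match(desc_a, desc_b):
            """Mutual nearest-neighbour matching of two sets of L2-normalised
            descriptors (one row per feature); returns index pairs (i, j)."""
            sim = desc_a @ desc_b.T                  # cosine similarity matrix
            nn_ab = sim.argmax(axis=1)               # best match in B for each A
            nn_ba = sim.argmax(axis=0)               # best match in A for each B
            return np.array([(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i])

        # Random placeholder descriptors standing in for two images' features.
        rng = np.random.default_rng(0)
        a = rng.normal(size=(500, 128)); a /= np.linalg.norm(a, axis=1, keepdims=True)
        b = rng.normal(size=(600, 128)); b /= np.linalg.norm(b, axis=1, keepdims=True)
        print(len(mutual_nn_match(a, b)), "mutual matches")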

    Argument Mining on Twitter: Arguments, Facts and Sources

    No full text
    Social media collect and spread on the Web personal opinions, facts, fake news, and all kinds of information users may be interested in. Applying argument mining methods to such heterogeneous data sources is a challenging open research issue, in particular considering the peculiarities of the language used to write textual messages on social media. In addition, new issues emerge when dealing with arguments posted on such platforms, such as the need to distinguish between personal opinions and actual facts, and to detect the source disseminating information about such facts to allow for provenance verification. In this paper, we apply supervised classification to identify arguments on Twitter, and we present two new tasks for argument mining, namely facts recognition and source identification. We study the feasibility of the approaches proposed to address these tasks on a set of tweets related to the Grexit and Brexit news topics.
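
    As a rough illustration of the supervised classification step described here, the sketch below trains a TF-IDF plus logistic-regression classifier to separate argument from non-argument tweets. The tiny inline examples and the model choice are placeholders, not the features, data, or classifiers used in the paper.

        # Illustrative only: not the paper's features, data, or model.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        tweets = [
            "Leaving the EU will wreck the economy, the forecasts prove it",
            "Brexit vote scheduled for Thursday",
            "Grexit would be a disaster because Greece depends on EU funding",
            "Watching the referendum coverage tonight",
        ]
        labels = ["argument", "not-argument", "argument", "not-argument"]

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
        clf.fit(tweets, labels)
        print(clf.predict(["The single market is vital, so leaving makes no sense"]))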
